Serverless Architecture editorial team: Serverless is a rather controversial buzzword: servers are still in use, and everyone seems to understand serverless as something different (e.g. FaaS or BaaS). What does serverless mean to you personally?
Lee Atchison: For me, the most useful definition of serverless is any platform-level service that provides capabilities to its customers without the customers needing to know or understand the underlying infrastructure that is running the service.
Using this definition as a baseline, FaaS offerings such as AWS Lambda certainly qualify, but by the same token Amazon S3 would be considered a serverless offering as well.
Amazon S3 provides storage capability as a service without the consumer needing any knowledge or understanding of the underlying infrastructure that operates the service.
The best way to see the distinction between server-based and serverless services is to compare Amazon DynamoDB and Amazon RDS.
Both are database offerings. DynamoDB is serverless because you do not have to know or understand the underlying server architecture. Amazon RDS is server-based because you have to know and understand the servers the service runs on, and with that understanding comes a requirement for you to configure and size the servers properly and build in appropriate redundancy and backups. These two services are a great way to compare server-based and serverless offerings.
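To make the difference concrete, here is a minimal sketch using boto3 in Python. The credentials, region, and resource names (`orders`, `orders-db`, and the placeholder password) are assumptions for illustration only: provisioning a DynamoDB table requires no server decisions, while provisioning an RDS instance forces you to choose the instance class, storage, redundancy, and backup policy yourself.

```python
import boto3

dynamodb = boto3.client("dynamodb")
rds = boto3.client("rds")

# Serverless: create a DynamoDB table. No instance type, no storage size,
# no redundancy or backup topology to design; the service handles it.
dynamodb.create_table(
    TableName="orders",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Server-based: create an RDS instance. You must choose and maintain the
# server size, storage, redundancy, and backup policy yourself.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",       # hypothetical instance name
    Engine="mysql",
    DBInstanceClass="db.m5.large",          # you size the server
    AllocatedStorage=100,                   # you size the storage (GiB)
    MasterUsername="admin",
    MasterUserPassword="example-password",  # placeholder only
    MultiAZ=True,                           # you design the redundancy
    BackupRetentionPeriod=7,                # you configure the backups
)
```

The DynamoDB call involves no capacity decisions at all, while the RDS call cannot even be made without them.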
Examples of AWS Serverless services:
- AWS Lambda
- DynamoDB
- SNS/SQS
- CloudFront
Examples of AWS Server-based services:
- RDS
- ElastiCache
- ECS/EKS
- Elastic Beanstalk
Overall, serverless services are easier to configure and scale than server-based ones.
Serverless Architecture editorial team: From a developer’s point of view, serverless has many advantages. One is that you practically don’t have to worry about the infrastructure anymore. In your opinion, how does serverless change the everyday life of developers?
Lee Atchison: Developers, especially those who are building, deploying, and operating services using DevOps methodologies, benefit greatly from cloud-based infrastructure services. Serverless technologies, in particular, make using and scaling infrastructure services more efficient for the everyday developer. Furthermore, developers can now take advantage of managed services that allow them to build everything from simple event-based workloads to complex video pipelines, all without managing the intricacies of running any servers. The low overhead to start using these services and the minimal effort required to build and maintain production workloads on them give software engineers the opportunity to speed up innovation.
However, that does not mean that all problems are solved for the developer. As a simple example, users of AWS Lambda still have to consider performance and scaling, because the performance characteristics of a particular Lambda function can vary depending on the function's usage patterns and the scale at which it operates. These variances can have an adverse impact on the ease and convenience of operating serverless-based infrastructures. Put simply, just because you don't have to scale your servers does not mean you don't have to think about and deal with scaling issues.
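Cold starts are one concrete source of this variance. The sketch below is a hypothetical Python Lambda handler (the `orders` table and `order_id` key are made up): work at module scope runs once per execution environment, so infrequent or bursty traffic triggers more cold starts, and the same code shows different latency profiles at different scales.

```python
import time

import boto3

# Work at module scope runs once per execution environment (a "cold start").
# Under steady traffic this cost is amortized over many warm invocations;
# under bursty or infrequent traffic, a larger share of requests pay it.
_init_start = time.time()
dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("orders")  # hypothetical table name
INIT_MS = (time.time() - _init_start) * 1000


def handler(event, context):
    # Work here runs on every invocation, warm or cold.
    result = orders_table.get_item(Key={"order_id": event["order_id"]})
    return {
        "init_ms": INIT_MS,          # initialization cost paid by this container
        "item": result.get("Item"),  # absent if the key was not found
    }
```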
Serverless Architecture editorial team: Developers aren’t the only ones affected by the new model, especially when you think of DevOps: What are the consequences of the serverless approach for Operators/System Administrators?
Lee Atchison: The same challenges that affect developers also affect modern operations personnel. As we move forward, these roles are becoming very much intertwined in the world of DevOps.
However, infrastructure operations management will continue to be essential. Operations teams will still be responsible for making sure the environments being created are properly configured for failover. Additionally, Ops will be responsible for monitoring cloud services, which will give them essential information to improve future cloud service usage. Operations teams are uniquely suited to support organizations during this next phase of the serverless journey.
Serverless Architecture editorial team: In the context of DevOps, observability and monitoring are very important topics. How does observability work in serverless applications, since you are not in charge of the underlying infrastructure anymore as an Operator?
Lee Atchison: Observability is now much more about the end-to-end, full-stack application view, in addition to the low-level details of the infrastructure an application runs on. Observability capabilities such as tracing are becoming significantly more important. Additionally, the ability to combine high-level observability patterns (such as application tracing) with lower-level monitoring capabilities (such as server monitoring, Kubernetes/container monitoring, and cloud service monitoring) in a single, unified view that lets you compare and identify causal relationships is becoming more and more critical.
For example, a service may be failing its SLAs due to a back-end dependency. Observability enables teams to quickly see and understand that the dependency is using a cloud service that is currently experiencing issues. It is critical that teams have this ability in order to keep their services up and running.
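As one illustration of the tracing side of this, here is a minimal sketch using the OpenTelemetry Python SDK (the service and span names are invented, and spans are exported to the console purely for demonstration): child spans around calls to back-end dependencies are what make a slow or failing dependency visible in the end-to-end trace.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for demonstration; a real deployment would
# point this at a tracing backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name


def handle_checkout(order_id: str) -> None:
    # Parent span: the end-to-end request as the caller experiences it.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.id", order_id)
        # Child span: the call to a back-end dependency. If that dependency's
        # cloud service is having issues, this is the span that shows it.
        with tracer.start_as_current_span("orders-db.get_item"):
            pass  # the actual downstream call would go here


handle_checkout("12345")
```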
Serverless Architecture editorial team: Simon Wardley has put forward the thesis that containers and Kubernetes are only a marginal phenomenon in the history of software development and could soon become obsolete, since “serverless is eating the world.” What do you think of that?
Lee Atchison: Serverless technologies enable engineers to build modern applications with increased agility and lower total cost of ownership. Building serverless applications means that your developers can focus more of their time on delivering the core product, rather than managing and operating servers or runtimes. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products that scale and are reliable.
Container technology will also remain a major piece of the puzzle and a critical component in building and maintaining modern applications in the future.
Serverless Architecture editorial team: Finally, a brief look into the crystal ball: What role will serverless play in 2020?
Lee Atchison: Serverless will be one of the critical services that, together with other modern technologies, can be described as “Dynamic Infrastructures.” Overall, dynamic infrastructures will be used to build highly scalable, highly available, service-based applications that will form the heart of our modern world.
In the next year, expect serverless to become more mainstream in enterprise applications and more integrated with other technologies such as microservices and traditional application architectures. Serverless won’t necessarily replace the need for traditional computation, and we will see significant growth in Kubernetes and container-as-a-service models in the future.
Thank you!